KMID : 1022420160080020023
Phonetics and Speech Sciences
2016, Volume 8, No. 2, pp. 23-30
Implementation of CNN in the view of mini-batch DNN training for efficient second order optimization
Song Hwa-Jeon, Jung Ho-Young, Park Jeon-Gue
Abstract
This paper describes implementation schemes for CNNs, viewed as mini-batch DNN training, for efficient second-order optimization. The approach trains the parameters of a CNN with the same parameter-update procedure used for a DNN by simply rearranging an input image into a sequence of local patches, which is equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied directly to train the parameters of a CNN. In both image recognition on the MNIST DB and syllable-based automatic speech recognition, the proposed CNN implementation scheme shows better performance than a DNN-based one.
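The patch rearrangement the abstract describes can be illustrated with a minimal sketch (an im2col-style conversion; the function name, patch size, and stride below are illustrative assumptions, not taken from the paper): each local patch of the image is flattened into one row, so the resulting matrix looks like a mini-batch of DNN inputs and a dense layer applied to it acts like a convolutional layer applied to the image.

```python
import numpy as np

def image_to_patch_batch(image, patch, stride):
    """Rearrange a single 2-D image into a mini-batch of flattened
    local patches (im2col-style). A dense (DNN) layer applied to the
    returned batch behaves like a convolution over the image."""
    H, W = image.shape
    rows = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            rows.append(image[i:i + patch, j:j + patch].ravel())
    # shape: (num_patches, patch * patch)
    return np.stack(rows)

# Example: a 28x28 MNIST-sized image with 5x5 patches, stride 1,
# yields 24*24 = 576 patches of 25 values each.
img = np.random.rand(28, 28)
batch = image_to_patch_batch(img, patch=5, stride=1)
assert batch.shape == (576, 25)
```

Once the image is in this mini-batch form, any DNN parameter-update rule, including a second-order one, can be applied to the "convolutional" weights unchanged.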
Keywords
automatic speech recognition, DNN, CNN, second order optimization
Listed journal information
Korea Research Foundation (KCI)